Bayesian Markov Switching Tensor Regression For Time-Varying Networks
Authors
Abstract
Similar resources
Time-Varying Dynamic Bayesian Networks
Directed graphical models such as Bayesian networks are a favored formalism for modeling the dependency structures in complex multivariate systems such as those encountered in biology and neural science. When a system is undergoing dynamic transformation, temporally rewiring networks are needed for capturing the dynamic causal influences between covariates. In this paper, we propose time-varyin...
Time Varying Transition Probabilities for Markov Regime Switching Models
We propose a new Markov switching model with time varying probabilities for the transitions. The novelty of our model is that the transition probabilities evolve over time by means of an observation driven model. The innovation of the time varying probability is generated by the score of the predictive likelihood function. We show how the model dynamics can be readily interpreted. We investigat...
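The score-driven mechanism described in this abstract can be sketched as follows. This is a minimal illustrative recursion, not the paper's actual model: the parameter names (`omega`, `a`, `b`), the logistic link, and the "surprise" score proxy are all assumptions.

```python
import numpy as np

def score_driven_transition(y, omega=0.0, a=0.05, b=0.95):
    """Sketch: evolve a single transition probability p_t = sigmoid(f_t),
    where f_t follows an observation-driven (score-like) recursion.
    Parameter names and the score proxy are illustrative only."""
    f = omega / (1.0 - b)                 # start at the recursion's fixed point
    probs = []
    for y_t in y:
        p = 1.0 / (1.0 + np.exp(-f))      # logistic link keeps p in (0, 1)
        probs.append(p)
        score = y_t - p                   # signed surprise of the observation
        f = omega + a * score + b * f     # GAS-style observation-driven update
    return np.array(probs)
```

Because the innovation is driven by the data through the score term, the transition probability rises after observations the current regime explains poorly, which is the interpretability property the abstract emphasizes.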
Bayesian Tensor Regression
We propose a Bayesian approach to regression with a scalar response on vector and tensor covariates. Vectorization of the tensor prior to analysis fails to exploit the structure, often leading to poor estimation and predictive performance. We introduce a novel class of multiway shrinkage priors for tensor coefficients in the regression setting and present posterior consistency results under mil...
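The structure-exploiting idea behind tensor-coefficient regression can be illustrated with a rank-1 (outer-product) coefficient matrix fitted by alternating least squares. This frequentist sketch merely stands in for the Bayesian shrinkage-prior estimation the abstract describes; all dimensions and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, n = 8, 6, 200
u_true, v_true = rng.normal(size=p), rng.normal(size=q)
B_true = np.outer(u_true, v_true)             # rank-1 coefficient matrix
X = rng.normal(size=(n, p, q))                # matrix-valued covariates
y = np.einsum('npq,pq->n', X, B_true) + 0.01 * rng.normal(size=n)

# Alternating least squares for B = u v^T: far fewer parameters (p + q)
# than the vectorized regression (p * q), which is the point of the low-rank
# structure the abstract advocates.
u, v = rng.normal(size=p), rng.normal(size=q)
for _ in range(50):
    Zv = np.einsum('npq,q->np', X, v)         # regressors for u given v
    u = np.linalg.lstsq(Zv, y, rcond=None)[0]
    Zu = np.einsum('npq,p->nq', X, u)         # regressors for v given u
    v = np.linalg.lstsq(Zu, y, rcond=None)[0]
B_hat = np.outer(u, v)
```

Vectorizing `X` instead would estimate 48 free coefficients from the same data; the rank-1 factorization estimates 14, which is why ignoring the tensor structure tends to hurt estimation and prediction.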
Bayesian Markov Regime-Switching Models for Cointegration
This paper introduces a Bayesian Markov regime-switching model that allows the cointegration relationship between two time series to be switched on and off over time. Unlike classical approaches for testing and modeling cointegration, the Bayesian Markov switching method allows for estimation of the regime-specific model parameters via Markov Chain Monte Carlo and generates more reliable estima...
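A minimal data-generating sketch of the on/off cointegration idea: a two-state Markov chain decides whether the second series tracks the first. The transition probabilities and noise scales are illustrative, and the MCMC estimation step the abstract describes is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 500
P = np.array([[0.95, 0.05],        # illustrative regime transition matrix
              [0.05, 0.95]])

s = np.zeros(T, dtype=int)         # latent regime path
for t in range(1, T):
    s[t] = rng.choice(2, p=P[s[t - 1]])

x = np.cumsum(rng.normal(size=T))         # common stochastic trend
z = np.cumsum(rng.normal(size=T))         # independent random walk
y = np.empty(T)
for t in range(T):
    # regime 1: y tracks x (cointegration "on"); regime 0: y drifts on its own
    y[t] = x[t] + 0.1 * rng.normal() if s[t] == 1 else z[t]
```

Classical cointegration tests applied to the whole sample would blur the two regimes together; the Bayesian switching approach instead infers the regime path `s` jointly with the regime-specific parameters.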
Tensor Switching Networks
We present a novel neural network algorithm, the Tensor Switching (TS) network, which generalizes the Rectified Linear Unit (ReLU) nonlinearity to tensor-valued hidden units. The TS network copies its entire input vector to different locations in an expanded representation, with the location determined by its hidden unit activity. In this way, even a simple linear readout from the TS representa...
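The copy-to-slots expansion described above can be sketched directly. `ts_expand` is a hypothetical helper, and the example checks only the basic property that a suitable linear readout of the expanded representation reproduces an ordinary ReLU layer.

```python
import numpy as np

def ts_expand(x, W):
    """Tensor-Switching expansion (sketch): each hidden unit i either copies
    the whole input x into slot i (if its pre-activation is positive) or
    leaves that slot at zero. Shapes: x (d,), W (h, d) -> output (h, d)."""
    gates = (W @ x > 0).astype(x.dtype)   # 0/1 switching decisions
    return gates[:, None] * x[None, :]    # copy x into the active slots

rng = np.random.default_rng(1)
d, h = 5, 4
W = rng.normal(size=(h, d))
x = rng.normal(size=d)

Z = ts_expand(x, W)                       # expanded representation, shape (h, d)

# A per-slot linear readout with weights equal to W recovers the standard
# ReLU layer, showing the expansion strictly generalizes it:
relu = np.maximum(W @ x, 0.0)
recovered = np.einsum('hd,hd->h', W, Z)
assert np.allclose(recovered, relu)
```

Because the readout is free to use weights other than `W`, even a linear map from `Z` is more expressive than the ReLU layer it generalizes, which is the representational point the abstract makes.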
Journal: SSRN Electronic Journal
Year: 2018
ISSN: 1556-5068
DOI: 10.2139/ssrn.3192341